Univariate Polynomial Inference by Monte Carlo Message Length Approximation

Authors

  • Leigh J. Fitzgibbon
  • David L. Dowe
  • Lloyd Allison
Abstract

We apply the Message from Monte Carlo (MMC) algorithm to inference of univariate polynomials. MMC is an algorithm for point estimation from a Bayesian posterior sample. It partitions the posterior sample into sets of regions that contain similar models. Each region has an associated message length (given by Dowe’s MMLD approximation) and a point estimate that is representative of models in the region. The regions and point estimates are chosen so that the Kullback-Leibler distance between models in the region and the associated point estimate is small (using Wallace’s FSMML Boundary Rule). We compare the MMC algorithm’s point estimation performance with Minimum Message Length [12] and Structural Risk Minimisation on a set of ten polynomial and non-polynomial functions with Gaussian noise. The orthonormal polynomial parameters are sampled using reversible jump Markov chain Monte Carlo methods.
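The region-summarisation step described above can be illustrated with a small sketch (hypothetical function names, not the paper's implementation). For two polynomial regression models sharing a known Gaussian noise level, the KL divergence reduces to a scaled squared difference of their predicted means on the design points; a region's representative point estimate can then be taken as the posterior sample minimising the mean KL distance to the others — a simplification of the FSMML Boundary Rule, shown here only to make the idea concrete:

```python
import numpy as np

def kl_gaussian_poly(coeffs_a, coeffs_b, xs, sigma):
    """KL divergence between two polynomial models with the same known
    Gaussian noise level sigma, evaluated on design points xs.
    For equal-variance Gaussians this reduces to a scaled squared
    difference of the predicted means."""
    fa = np.polyval(coeffs_a, xs)
    fb = np.polyval(coeffs_b, xs)
    return np.sum((fa - fb) ** 2) / (2.0 * sigma ** 2)

def region_point_estimate(samples, xs, sigma):
    """Pick the posterior sample with the smallest average KL distance
    to the other samples in the region (a KL 'medoid')."""
    n = len(samples)
    avg_kl = [
        np.mean([kl_gaussian_poly(samples[j], samples[i], xs, sigma)
                 for j in range(n) if j != i])
        for i in range(n)
    ]
    return samples[int(np.argmin(avg_kl))]

# Toy "region": posterior draws of quadratic coefficients near [1, -2, 0.5]
rng = np.random.default_rng(0)
xs = np.linspace(-1.0, 1.0, 20)
samples = [np.array([1.0, -2.0, 0.5]) + 0.05 * rng.standard_normal(3)
           for _ in range(50)]
estimate = region_point_estimate(samples, xs, sigma=0.1)
print(estimate)
```

The returned estimate is always one of the sampled models, which is what makes it representative of the region rather than an average that may lie outside it.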


Similar Articles

Estimating parameters in stochastic systems: A variational Bayesian

This work is concerned with approximate inference in dynamical systems, from a variational Bayesian perspective. When modelling real world dynamical systems, stochastic differential equations appear as a natural choice, mainly because of their ability to model the noise of the system by adding a variation of some stochastic process to the deterministic dynamics. Hence, inference in such proce...


The Bias-Variance Dilemma of the Monte

We investigate the setting in which Monte Carlo methods are used and draw a parallel to the formal setting of statistical inference. In particular, we find that Monte Carlo approximation gives rise to a bias-variance dilemma. We show that it is possible to construct a biased approximation scheme with a lower approximation error than a related unbiased algorithm.


Optimal Monte Carlo Estimation of Belief Network Inference

We present two Monte Carlo sampling algorithms for probabilistic inference that guarantee polynomial-time convergence for a larger class of networks than current sampling algorithms provide. These new methods are variants of the known likelihood weighting algorithm. We use recent advances in the theory of optimal stopping rules for Monte Carlo simulation to obtain an inference approximati...


Bayesian Posterior Comprehension via Message from Monte Carlo

We discuss the problem of producing an epitome, or brief summary, of a Bayesian posterior distribution and then investigate a general solution based on the Minimum Message Length (MML) principle. Clearly, the optimal criterion for choosing such an epitome is determined by the epitome’s intended use. The interesting general case is where this use is unknown since, in order to be practical, the c...


Accelerating Asymptotically Exact MCMC for Computationally Intensive Models via Local Approximations

We construct a new framework for accelerating Markov chain Monte Carlo in posterior sampling problems where standard methods are limited by the computational cost of the likelihood, or of numerical models embedded therein. Our approach introduces local approximations of these models into the Metropolis-Hastings kernel, borrowing ideas from deterministic approximation theory, optimization, and e...




Publication year: 2002